19 research outputs found

    Real-Time Panoramic Tracking for Event Cameras

    Event cameras are a paradigm shift in camera technology. Instead of full frames, the sensor captures a sparse set of events caused by intensity changes. Since only the changes are transferred, these cameras are able to capture very fast movements of objects in the scene or of the camera itself. In this work we propose a novel method for tracking event cameras in a panoramic setting with three degrees of freedom. We propose a direct camera tracking formulation, similar to the state of the art in visual odometry. We show that the minimal information needed for simultaneous tracking and mapping is the spatial position of events, without using the appearance of the imaged scene points. We verify the robustness to fast camera movements and dynamic objects in the scene on a recently proposed dataset and on self-recorded sequences. Comment: Accepted to International Conference on Computational Photography 201
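
    The claim that event positions alone suffice suggests a simple geometric core: each event pixel is back-projected, rotated by the current 3-DoF pose estimate, and splatted onto a panoramic map. The Python sketch below illustrates only that step under assumed names (K for the camera intrinsics, R for the rotation, pan_w/pan_h for the panorama size); it is not the authors' tracker, which additionally optimises the pose against the map.

        import numpy as np

        def event_to_panorama(x, y, R, K, pan_w, pan_h):
            """Back-project an event pixel, rotate it by the 3-DoF estimate R and
            convert the resulting ray to equirectangular panorama coordinates."""
            ray = np.linalg.inv(K) @ np.array([x, y, 1.0])    # camera ray
            ray = R @ ray                                     # world ray (rotation only)
            ray /= np.linalg.norm(ray)
            lon = np.arctan2(ray[0], ray[2])                  # longitude in [-pi, pi]
            lat = np.arcsin(ray[1])                           # latitude in [-pi/2, pi/2]
            u = (lon / (2.0 * np.pi) + 0.5) * pan_w
            v = (lat / np.pi + 0.5) * pan_h
            return u, v

        def splat_events(events, R, K, pan_map):
            """Accumulate event positions into the panoramic map; tracking scores how
            well newly observed events align with this map (no appearance is used)."""
            h, w = pan_map.shape
            for x, y in events:                               # only spatial positions
                u, v = event_to_panorama(x, y, R, K, w, h)
                pan_map[int(v) % h, int(u) % w] += 1.0
            return pan_map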

    PetroSurf3D - A Dataset for high-resolution 3D Surface Segmentation

    The development of powerful 3D scanning hardware and reconstruction algorithms has strongly promoted the generation of 3D surface reconstructions in different domains. An area of special interest for such 3D reconstructions is the cultural heritage domain, where surface reconstructions are generated to digitally preserve historical artifacts. While reconstruction quality nowadays is sufficient in many cases, the robust analysis (e.g. segmentation, matching, and classification) of reconstructed 3D data is still an open topic. In this paper, we target the automatic and interactive segmentation of high-resolution 3D surface reconstructions from the archaeological domain. To foster research in this field, we introduce a fully annotated and publicly available large-scale 3D surface dataset, including high-resolution meshes, depth maps and point clouds, as a novel benchmark dataset for the community. We provide baseline results for our existing random forest-based approach and, for the first time, investigate segmentation with convolutional neural networks (CNNs) on the data. Results show that both approaches have complementary strengths and weaknesses and that the provided dataset represents a challenge for future research. Comment: CBMI submission; dataset and more information can be found at http://lrs.icg.tugraz.at/research/petroglyphsegmentation
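
    As a rough illustration of the kind of per-pixel baseline the abstract mentions, the sketch below trains a random forest on simple depth-map features. The file names, the three-channel feature (depth plus its gradients), and the binary petroglyph/background labels are assumptions for illustration; it does not reproduce the paper's random-forest or CNN baselines.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def depth_features(depth):
            """Stack the depth value and its local gradients as per-pixel features."""
            gy, gx = np.gradient(depth)
            return np.stack([depth, gx, gy], axis=-1).reshape(-1, 3)

        # Hypothetical files holding one training depth map and its per-pixel labels
        train_depth = np.load("train_depth.npy")   # (H, W) float depth map
        train_mask = np.load("train_mask.npy")     # (H, W) {0, 1} petroglyph mask

        clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
        clf.fit(depth_features(train_depth), train_mask.reshape(-1))

        test_depth = np.load("test_depth.npy")
        pred = clf.predict(depth_features(test_depth)).reshape(test_depth.shape)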

    Variational segmentation of elongated volumetric structures

    We present an interactive approach for segmenting thin volumetric structures. The proposed segmentation model is based on an anisotropic weighted Total Variation energy with a global volumetric constraint and is minimized using an efficient numerical approach and a convex relaxation. The algorithm is globally optimal w.r.t. the relaxed problem for any volumetric constraint. The binary solution of the relaxed problem equals the globally optimal solution of the original problem. Implemented on today's user-programmable graphics cards, it allows real-time user interaction. The method is applied to and evaluated on the task of articular cartilage segmentation of human knee joints and segmentation of tubular structures like liver vessels and airway trees. Figure 1: Segmentation results for articular cartilage of a human knee joint; (a) a standard Geodesic Active Contour (GAC) segmentation model, (b) the proposed model incorporating edge direction information.
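
    A minimal numerical sketch of the relaxed model described above, under simplifying assumptions: an isotropic weighted TV term with edge weight g instead of the paper's anisotropic version, a data term f, and the volume handled by a simple dual-ascent multiplier. The paper's exact constraint handling and GPU implementation are not reproduced, and the plain primal-dual iteration below omits the usual extrapolation step for brevity.

        import numpy as np

        def grad(u):
            gx = np.roll(u, -1, axis=1) - u
            gy = np.roll(u, -1, axis=0) - u
            return gx, gy

        def div(px, py):
            return (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))

        def tv_segment(f, g, lam=1.0, V=0.3, iters=500, tau=0.25, sigma=0.25, eta=0.1):
            """Relaxed model: min_{u in [0,1]} sum g|grad u| + lam*sum f*u  s.t. mean(u) = V."""
            u = np.full_like(f, V)
            px = np.zeros_like(f); py = np.zeros_like(f)
            mu = 0.0                                    # multiplier for the volume constraint
            for _ in range(iters):
                gx, gy = grad(u)
                px += sigma * gx; py += sigma * gy
                norm = np.maximum(1.0, np.sqrt(px**2 + py**2) / np.maximum(g, 1e-8))
                px /= norm; py /= norm                  # project dual onto |p| <= g
                u += tau * (div(px, py) - lam * f - mu)
                u = np.clip(u, 0.0, 1.0)                # relaxed binary constraint
                mu += eta * (u.mean() - V)              # dual ascent toward the target volume
            return u > 0.5                              # threshold the relaxed solution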

    Real-Time Intensity-Image Reconstruction for Event Cameras Using Manifold Regularisation

    Event cameras or neuromorphic cameras mimic the human perception system in that they measure the per-pixel intensity change rather than the actual intensity level. In contrast to traditional cameras, such cameras capture new information about the scene at MHz frequency in the form of sparse events. The high temporal resolution comes at the cost of losing the familiar per-pixel intensity information. In this work we propose a variational model that accurately models the behaviour of event cameras, enabling reconstruction of intensity images with arbitrary frame rate in real time. Our method is formulated on a per-event basis, where we explicitly incorporate information about the asynchronous nature of events via an event manifold induced by the relative timestamps of events. In our experiments we verify that solving the variational model on the manifold produces high-quality images without explicitly estimating optical flow. Comment: Accepted to BMVC 2016 as oral presentation, 12 page
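
    To make the per-event formulation concrete, the sketch below integrates events into a log-intensity image and periodically applies an off-the-shelf TV denoiser in the flat image plane. This is a simplification: the paper instead regularises on the event manifold induced by the timestamps, and the contrast threshold C as well as the use of skimage's denoiser are assumptions for illustration.

        import numpy as np
        from skimage.restoration import denoise_tv_chambolle

        C = 0.15                                    # assumed contrast threshold per event

        def reconstruct(events, height, width, denoise_every=1000):
            """events: iterable of (x, y, timestamp, polarity) with polarity in {-1, +1}."""
            log_I = np.zeros((height, width))
            for i, (x, y, t, p) in enumerate(events):
                log_I[y, x] += C * p                # each event signals a +/-C change in log intensity
                if (i + 1) % denoise_every == 0:    # periodic regularisation of the running estimate
                    log_I = denoise_tv_chambolle(log_I, weight=0.1)
            return np.exp(log_I)                    # back to (relative) intensity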